Search Results
Search for: All records
Total Resources: 5
Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available without a charge during the embargo (administrative interval). Some links on this page may take you to non-federal websites, whose policies may differ from this site's.
- Blakeney, Cody; Li, Xiaomin; Yan, Yan; Zong, Ziliang (IEEE Transactions on Parallel and Distributed Systems)
- Blakeney, Cody; Li, Xiaomin; Yan, Yan; Zong, Ziliang (IEEE International Conference on Edge Computing and Scalable Cloud)
- Li, Xiaomin; Blakeney, Cody; Zong, Ziliang (The Resource-Constrained Machine Learning Workshop, in conjunction with the IEEE Conference on Machine Learning and Systems (MLSys’20))
- Blakeney, Cody; Yan, Yan; Zong, Ziliang (IEEE Winter Conference on Applications of Computer Vision (WACV ’20)). Unstructured neural network pruning is an effective technique that can significantly reduce the theoretical model size, computation demand, and energy consumption of large neural networks without compromising accuracy. However, a number of fundamental questions about pruning have not yet been answered. For example, do pruned neural networks contain the same representations as the original network? Is pruning a compression or an evolution process? Does pruning only work on trained neural networks? What are the role and value of the uncovered sparsity structure? In this paper, we strive to answer these questions by analyzing three unstructured pruning methods (magnitude-based pruning, post-pruning re-initialization, and random sparse initialization). We conduct extensive experiments using the Singular Vector Canonical Correlation Analysis (SVCCA) tool to study and contrast the layer representations of pruned and original ResNet, VGG, and ConvNet models. We make several interesting observations: 1) Pruned neural network models evolve to substantially different representations while still maintaining similar accuracy. 2) Initialized sparse models can achieve reasonably good accuracy compared to well-engineered pruning methods. 3) Sparsity structures discovered by pruning models are not inherently important or useful.
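The WACV ’20 abstract above contrasts three unstructured pruning methods; as a rough illustration of the first of them, the following is a minimal, hypothetical PyTorch sketch of magnitude-based unstructured pruning. It is not the authors' implementation: the per-layer thresholding scheme, the 90% sparsity level, and the ResNet-18 stand-in are illustrative assumptions only.

```python
# Minimal sketch of magnitude-based unstructured pruning (illustrative only;
# not the implementation from the paper). Assumes PyTorch and torchvision.
import torch
import torch.nn as nn
from torchvision.models import resnet18


def magnitude_prune_(model: nn.Module, sparsity: float = 0.9) -> None:
    """Zero out the smallest-magnitude weights in every Conv2d/Linear layer."""
    for module in model.modules():
        if isinstance(module, (nn.Conv2d, nn.Linear)):
            weight = module.weight.data
            k = int(weight.numel() * sparsity)  # number of weights to remove
            if k == 0:
                continue
            # The k-th smallest absolute value becomes the pruning threshold.
            threshold = weight.abs().flatten().kthvalue(k).values
            mask = (weight.abs() > threshold).to(weight.dtype)
            weight.mul_(mask)  # apply the unstructured (element-wise) mask


if __name__ == "__main__":
    # ResNet-18 is used here only as a convenient stand-in for the ResNet,
    # VGG, and ConvNet models studied in the paper.
    model = resnet18(weights=None)
    magnitude_prune_(model, sparsity=0.9)
    prunable = [m for m in model.modules() if isinstance(m, (nn.Conv2d, nn.Linear))]
    total = sum(m.weight.numel() for m in prunable)
    zeros = sum((m.weight == 0).sum().item() for m in prunable)
    print(f"Global sparsity of pruned layers: {zeros / total:.2%}")
```

In practice the pruning mask would be kept and re-applied during fine-tuning so that pruned weights stay zero; that bookkeeping is omitted here for brevity.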